A Toom rule that increases the thickness of sets
Toom's north-east-self voting cellular automaton rule R is known to suppress
small minorities. A variant, which we call R^+, is also known to turn an
arbitrary initial configuration into a homogeneous one (without changing the
ones that were homogeneous to start with). Here we show that R^+ always
increases a certain property of sets called thickness. This result is intended
as a step towards a proof of fast convergence to consensus under R^+.
The latter is observable experimentally, even in the presence of some noise.
Comment: 16 pages, 8 figures
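The abstract does not spell out the dynamics of R^+, but the base rule R it builds on is the classical north-east-self majority vote, which can be sketched in a few lines. The sketch below is an illustration of R only, under the assumption of periodic boundaries; it is not the authors' R^+ variant or their thickness argument.

```python
import numpy as np

def toom_step(grid):
    """One synchronous step of Toom's north-east-self majority rule:
    each cell adopts the majority value among itself, its northern
    neighbour and its eastern neighbour (periodic boundaries assumed)."""
    north = np.roll(grid, 1, axis=0)   # cell above (row 0 is the top row)
    east = np.roll(grid, -1, axis=1)   # cell to the right
    return ((grid + north + east) >= 2).astype(grid.dtype)

# A small minority of 0's in a sea of 1's is eroded within a few steps,
# illustrating the "suppresses small minorities" property.
grid = np.ones((16, 16), dtype=np.int8)
grid[5:8, 5:8] = 0
for _ in range(10):
    grid = toom_step(grid)
```

The rule peels a finite island of the minority value away from its north-east corner, one anti-diagonal per step, so a 3x3 island disappears after five steps.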
Density classification on infinite lattices and trees
Consider an infinite graph with nodes initially labeled by independent
Bernoulli random variables of parameter p. We address the density
classification problem, that is, we want to design a (probabilistic or
deterministic) cellular automaton or a finite-range interacting particle system
that evolves on this graph and decides whether p is smaller or larger than 1/2.
Precisely, the trajectories should converge to the uniform configuration with
only 0's if p < 1/2 and with only 1's if p > 1/2. We present solutions to that
problem on the d-dimensional lattice, for any d > 1, and on the regular
infinite trees. For Z, we propose some candidates that we back up with
numerical simulations.
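The abstract does not name its candidates for Z. As an illustration of what a one-dimensional density classifier looks like, here is the classical Gacs-Kurdyumov-Levin (GKL) rule on a finite ring, a well-known candidate for this task; it is an assumption on my part that it resembles the candidates studied, not a claim from the paper.

```python
def gkl_step(s):
    """One step of the Gacs-Kurdyumov-Levin rule on a ring: a cell in
    state 0 takes the majority of itself and its neighbours at distances
    1 and 3 to its left; a cell in state 1 does the same to its right."""
    n = len(s)
    out = []
    for i, v in enumerate(s):
        if v == 0:
            trio = (s[i], s[(i - 1) % n], s[(i - 3) % n])
        else:
            trio = (s[i], s[(i + 1) % n], s[(i + 3) % n])
        out.append(1 if sum(trio) >= 2 else 0)
    return out

# Isolated minority cells are removed in a single step, driving a
# configuration of density well above 1/2 to the all-1's fixed point.
s = [1] * 99
for i in (10, 40, 70):
    s[i] = 0
for _ in range(20):
    s = gkl_step(s)
```

An isolated 0 in a sea of 1's sees two 1's in its left-looking trio and flips immediately; by symmetry an isolated 1 in a sea of 0's is erased the same way.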
Solomonoff Induction Violates Nicod's Criterion
Nicod's criterion states that observing a black raven is evidence for the
hypothesis H that all ravens are black. We show that Solomonoff induction does
not satisfy Nicod's criterion: there are time steps in which observing black
ravens decreases the belief in H. Moreover, while observing any computable
infinite string compatible with H, the belief in H decreases infinitely often
when using the unnormalized Solomonoff prior, but only finitely often when
using the normalized Solomonoff prior. We argue that the fault is not with
Solomonoff induction; instead we should reject Nicod's criterion.
Comment: ALT 201
Prediction and Generation of Binary Markov Processes: Can a Finite-State Fox Catch a Markov Mouse?
Understanding the generative mechanism of a natural system is a vital
component of the scientific method. Here, we investigate one of the fundamental
steps toward this goal by presenting the minimal generator of an arbitrary
binary Markov process. This is a class of processes whose predictive model is
well known. Surprisingly, the generative model requires three distinct
topologies for different regions of parameter space. We show that a previously
proposed generator for a particular set of binary Markov processes is, in fact,
not minimal. Our results shed the first quantitative light on the relative
(minimal) costs of prediction and generation. We find, for instance, that the
difference between prediction and generation is maximized when the process is
approximately independent and identically distributed.
Comment: 12 pages, 12 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/gmc.ht
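The objects of study here are arbitrary binary Markov processes. A minimal sketch of that process class, assuming the parametrisation p = Pr(next = 1 | current = 0) and q = Pr(next = 0 | current = 1) (the parameter names are mine, not the paper's), samples a trajectory and checks the empirical density of 1's against the stationary value p/(p+q).

```python
import random

def sample_binary_markov(p, q, n, rng):
    """Sample n steps of a binary Markov chain with transition
    probabilities Pr(1 | 0) = p and Pr(0 | 1) = q, started in state 0."""
    x, out = 0, []
    for _ in range(n):
        if x == 0:
            x = 1 if rng.random() < p else 0
        else:
            x = 0 if rng.random() < q else 1
        out.append(x)
    return out

rng = random.Random(0)
xs = sample_binary_markov(0.3, 0.1, 200_000, rng)
density = sum(xs) / len(xs)  # stationary density of 1's is p/(p+q) = 0.75
```

The paper's actual contribution, the minimal generative model and its three parameter-space topologies, is not reproduced here; this only fixes the process class being generated.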
On the Computability of Solomonoff Induction and Knowledge-Seeking
Solomonoff induction is held as a gold standard for learning, but it is known
to be incomputable. We quantify its incomputability by placing various flavors
of Solomonoff's prior M in the arithmetical hierarchy. We also derive
computability bounds for knowledge-seeking agents, and give a limit-computable
weakly asymptotically optimal reinforcement learning agent.
Comment: ALT 201
Fixed Point and Aperiodic Tilings
An aperiodic tile set was first constructed by R. Berger while proving the
undecidability of the domino problem. It turned out that aperiodic tile sets
appear in many topics ranging from logic (the Entscheidungsproblem) to physics
(quasicrystals). We present a new construction of an aperiodic tile set that is
based on Kleene's fixed-point construction instead of geometric arguments. This
construction is similar to J. von Neumann's self-reproducing automata; similar
ideas were also used by P. Gacs in the context of error-correcting
computations. The flexibility of this construction allows us to construct a
"robust" aperiodic tile set that does not have periodic (or close to periodic)
tilings even if we allow some (sparse enough) tiling errors. This property was
not known for any of the existing aperiodic tile sets.
Comment: v5: technical revision (positions of figures are shifted)
Mean-field critical behaviour and ergodicity break in a nonequilibrium one-dimensional RSOS growth model
We investigate the nonequilibrium roughening transition of a one-dimensional
restricted solid-on-solid model by directly sampling the stationary probability
density of a suitable order parameter as the surface adsorption rate varies.
The shapes of the probability density histograms suggest a typical
Ginzburg-Landau scenario for the phase transition of the model, and estimates
of the "magnetic" exponent seem to confirm its mean-field critical behaviour.
We also find that the flipping times between the metastable phases of the
model scale exponentially with the system size, signaling the breaking of
ergodicity in the thermodynamic limit. Incidentally, we discover that a
closely related model, not considered before, also displays a phase transition
with the same critical behaviour as the original model. Our results support the
usefulness of off-critical histogram techniques in the investigation of
nonequilibrium phase transitions. We also briefly discuss in an appendix a good
and simple pseudo-random number generator used in our simulations.
Comment: LaTeX2e, 15 pages (large fonts and spacings), 5 figures. Accepted for publication in the Int. J. Mod. Phys.
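A restricted solid-on-solid (RSOS) model can be simulated with a few lines of Monte Carlo code. The sketch below assumes random-sequential deposition/evaporation attempts on a ring with an adsorption probability I call p_dep; the paper's exact dynamics, adsorption-rate parametrisation, and order parameter are not reproduced.

```python
import random

def rsos_sweep(h, p_dep, rng):
    """One Monte Carlo sweep of a 1D RSOS deposition/evaporation model:
    at each of len(h) attempts, pick a random site, try deposition (+1)
    with probability p_dep and evaporation (-1) otherwise, and accept
    the move only if the RSOS constraint |h[i] - h[i+-1]| <= 1 is
    preserved (periodic ring)."""
    n = len(h)
    for _ in range(n):
        i = rng.randrange(n)
        new = h[i] + (1 if rng.random() < p_dep else -1)
        if abs(new - h[(i - 1) % n]) <= 1 and abs(new - h[(i + 1) % n]) <= 1:
            h[i] = new

# In the growth phase (deposition strongly favoured) the interface
# advances while the height differences stay bounded by 1.
h = [0] * 64
rng = random.Random(1)
for _ in range(200):
    rsos_sweep(h, 0.8, rng)
```

Sampling an order parameter as p_dep varies, as the paper does for its histograms, would amount to recording a surface statistic between sweeps.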
Cellular automata for the self-stabilisation of colourings and tilings
We examine the problem of self-stabilisation, as introduced by Dijkstra in the 1970s, in the context of cellular automata stabilising on k-colourings, that is, on infinite grids which are coloured with k distinct colours in such a way that adjacent cells have different colours. Suppose that for whatever reason (e.g., noise, previous usage, tampering by an adversary), the colours of a finite number of cells in a valid k-colouring are modified, thus introducing errors. Is it possible to reset the system into a valid k-colouring with only the help of a local rule? In other words, is there a cellular automaton which, starting from any finite perturbation of a valid k-colouring, would always reach a valid k-colouring in finitely many steps? We discuss the different cases depending on the number of colours, and propose some deterministic and probabilistic rules which solve the problem for k ≠ 3. We also explain why the case k = 3 is more delicate. Finally, we propose some insights on the more general setting of this problem, passing from k-colourings to other tilings (subshifts of finite type).
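To make the repair question concrete, here is a minimal sketch on a finite torus, assuming k >= 5 so that a conflicted cell always has a colour absent from its four neighbours. It is a sequential greedy procedure of my own, not one of the paper's cellular automaton rules, which must act synchronously with a local rule on the infinite grid.

```python
def neighbours(i, j, n):
    """The four grid neighbours of (i, j) on an n x n torus."""
    return [((i - 1) % n, j), ((i + 1) % n, j),
            (i, (j - 1) % n), (i, (j + 1) % n)]

def repair(grid, k):
    """Greedily recolour conflicted cells. For k >= 5 a free colour
    always exists (a cell has only 4 neighbours), and fixing a cell
    chooses a colour avoiding all its neighbours, so no new conflict
    is created and one pass over the grid suffices."""
    n = len(grid)
    for i in range(n):
        for j in range(n):
            nb = {grid[a][b] for a, b in neighbours(i, j, n)}
            if grid[i][j] in nb:
                grid[i][j] = min(set(range(k)) - nb)

# A valid 5-colouring of the 10 x 10 torus: colour(i, j) = (i + 2j) mod 5.
grid = [[(i + 2 * j) % 5 for j in range(10)] for i in range(10)]
grid[3][3] = grid[3][4]   # introduce a finite number of errors
grid[7][2] = grid[6][2]
repair(grid, 5)
```

The delicacy of k = 3 that the abstract points to does not arise here precisely because k >= 5 guarantees local freedom of choice.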
Algorithmic statistics: forty years later
Algorithmic statistics has two different (and almost orthogonal) motivations.
From the philosophical point of view, it tries to formalize how statistics
works and why some statistical models are better than others. After the notion
of a "good model" is introduced, a natural question arises: is it possible
that for some piece of data there is no good model? If so, how often do such
bad ("non-stochastic") data appear "in real life"?
Another, more technical motivation comes from algorithmic information theory.
In this theory a notion of complexity of a finite object (=amount of
information in this object) is introduced; it assigns to every object some
number, called its algorithmic complexity (or Kolmogorov complexity).
Algorithmic statistics provides a more fine-grained classification: for each
finite object some curve is defined that characterizes its behavior. It turns
out that several different definitions give (approximately) the same curve.
In this survey we try to provide an exposition of the main results in the
field (including full proofs for the most important ones), as well as some
historical comments. We assume that the reader is familiar with the main
notions of algorithmic information theory (Kolmogorov complexity).
Comment: Missing proofs added
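Kolmogorov complexity itself is uncomputable, but a common computable stand-in, an upper-bound proxy only, never the theory's actual quantity, is the length of a compressed encoding. The sketch below uses zlib to contrast a highly structured string with an incompressible-looking one; the choice of compressor and the strings are my own illustration.

```python
import random
import zlib

def compressed_len(data: bytes) -> int:
    """Length of zlib-compressed data: a crude, computable upper-bound
    proxy for Kolmogorov complexity (which is itself uncomputable)."""
    return len(zlib.compress(data, level=9))

regular = b"ab" * 500                                    # highly structured
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(1000))   # looks incompressible

c_regular = compressed_len(regular)
c_noisy = compressed_len(noisy)
```

The structured string compresses to a tiny fraction of its length, while the pseudo-random one does not; in the survey's terms, the first has a short "good model" and the second behaves like typical noise.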
Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decision Theory
Decision theory formally solves the problem of rational agents in uncertain
worlds if the true environmental probability distribution is known.
Solomonoff's theory of universal induction formally solves the problem of
sequence prediction for unknown distributions. We unify both theories and give
strong arguments that the resulting universal AIXI model behaves optimally in
any computable environment. The major drawback of the AIXI model is that it is
uncomputable. To overcome this problem, we construct a modified algorithm
AIXI^tl, which is still superior to any other time-t and space-l bounded agent.
The computation time of AIXI^tl is of the order t x 2^l.
Comment: 8 two-column pages, latex2e, 1 figure, submitted to ijca